7 research outputs found

    An Evaluation of Calibration Methods for Data Mining Models in Simulation Problems

    Full text link
    Data mining is useful for making single decisions. The problem arises when there are several related problems and the best local decisions do not yield the best global result. We propose to calibrate each local data mining model in order to obtain accurate models, and to use simulation to merge the local models and obtain a good overall result.
    Bella Sanjuán, A. (2008). An Evaluation of Calibration Methods for Data Mining Models in Simulation Problems. http://hdl.handle.net/10251/13631
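    The calibration step mentioned in the abstract can be illustrated with a simple binning calibrator. This is a hedged sketch of the general idea only; the thesis evaluates several calibration methods, and the function name and toy data below are hypothetical. Raw scores are grouped into bins, and each score is replaced by the empirical positive rate of its bin.

```python
def binning_calibrate(scores, labels, n_bins=10):
    """Map raw scores in [0, 1] to calibrated probabilities via binning.

    A hypothetical, minimal calibrator; the thesis evaluates more
    elaborate methods (e.g. similarity-binning averaging).
    """
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)
    # Empirical positive rate per bin; fall back to the bin midpoint
    # when a bin receives no training scores.
    rates = [sum(b) / len(b) if b else (i + 0.5) / n_bins
             for i, b in enumerate(bins)]
    return lambda score: rates[min(int(score * n_bins), n_bins - 1)]

# An overconfident scorer: scores near 1 are only 60% positive.
cal = binning_calibrate([0.95, 0.92, 0.97, 0.91, 0.96], [1, 1, 0, 1, 0])
print(cal(0.93))  # the top bin's empirical rate: 0.6
```

    A calibrated model of this kind is what the proposal then feeds into the simulation that merges the local models.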

    Aggregative quantification for regression

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/s10618-013-0308-z
    The problem of estimating the class distribution (or prevalence) of a new unlabelled dataset (from a possibly different distribution) is a very common problem which has been addressed in one way or another in the past decades. This problem has recently been reconsidered as a new task in data mining, renamed quantification when the estimation is performed as an aggregation (and possible adjustment) of a single-instance supervised model (e.g., a classifier). However, the study of quantification has been limited to classification, while it is clear that this problem also appears, perhaps even more frequently, with other predictive problems, such as regression. In this case, the goal is to determine a distribution or an aggregated indicator of the output variable for a new unlabelled dataset. In this paper, we introduce a comprehensive new taxonomy of quantification tasks, distinguishing between the estimation of the whole distribution and the estimation of some indicators (summary statistics), for both classification and regression. This distinction is especially useful for regression, since predictions are numerical values that can be aggregated in many different ways, as in multi-dimensional hierarchical data warehouses. We focus on aggregative quantification for regression and show that the approaches borrowed from classification do not work. We present several techniques based on segmentation which are able to produce accurate estimations of the expected value and the distribution of the output variable. We show experimentally that these methods especially excel in the relevant scenarios where training and test distributions dramatically differ.
    We would like to thank the anonymous reviewers for their careful reviews, insightful comments and very useful suggestions.
    This work was supported by the MEC/MINECO projects CONSOLIDER-INGENIO CSD2007-00022 and TIN 2010-21062-C02-02, GVA project PROMETEO/2008/051, the COST Action IC0801 AT (European Cooperation in the field of Scientific and Technical Research), and the REFRAME project granted by the European Coordinated Research on Long-term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA) and funded by the Ministerio de Economía y Competitividad in Spain.
    Bella Sanjuán, A.; Ferri Ramírez, C.; Hernández Orallo, J.; Ramírez Quintana, MJ. (2014). Aggregative quantification for regression. Data Mining and Knowledge Discovery. 28(2):475-518. https://doi.org/10.1007/s10618-013-0308-z
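    The segmentation idea described in the abstract above can be sketched as follows. This is a toy illustration of the general approach, not the paper's exact algorithm; all names and data are hypothetical. The prediction range is cut into segments, each segment stores the mean of the true training outputs falling in it, and the aggregated estimate for a new unlabelled set weights those segment means by how often its predictions land in each segment, which can correct a systematic bias that naive averaging of predictions would carry over.

```python
def segment_quantifier(train_preds, train_true, n_seg=4):
    """Build an estimator of the mean output for a new unlabelled dataset."""
    lo, hi = min(train_preds), max(train_preds)
    width = (hi - lo) / n_seg or 1.0
    def seg_of(p):
        p = max(lo, min(p, hi))          # clip to the training range
        return min(int((p - lo) / width), n_seg - 1)
    sums, counts = [0.0] * n_seg, [0] * n_seg
    for p, y in zip(train_preds, train_true):
        i = seg_of(p)
        sums[i] += y
        counts[i] += 1
    # Mean TRUE output per segment (segment midpoint if the segment is empty).
    means = [sums[i] / counts[i] if counts[i] else lo + (i + 0.5) * width
             for i in range(n_seg)]
    def estimate_mean(test_preds):
        return sum(means[seg_of(p)] for p in test_preds) / len(test_preds)
    return estimate_mean

# A model with a systematic +1 bias: predictions are the true value plus one.
train_true = [0.0, 1.0, 2.0, 3.0]
train_preds = [1.0, 2.0, 3.0, 4.0]
est = segment_quantifier(train_preds, train_true)
# Naively averaging the test predictions [4, 4, 2, 2] would give 3.0;
# the segment means correct the bias and give 2.0.
print(est([4.0, 4.0, 2.0, 2.0]))
```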

    Model Integration in Data Mining: From Local to Global Decisions

    Full text link
    Machine learning is a research area that provides algorithms and techniques capable of learning automatically from past experience. These techniques are essential in the area of knowledge discovery from databases (KDD), whose central phase is typically known as data mining. The KDD process can be viewed as learning a model from previous data (model generation) and applying this model to new data (model deployment). The model deployment phase is very important, because users and, especially, organisations make decisions depending on the output of the models. In general, each model is learned independently, trying to obtain the best (local) result. However, when several models are used together, some of them may depend on one another (for example, the outputs of one model may be the inputs of another) and constraints appear. In this scenario, the best local decision for each problem treated individually might not yield the best global result, or the result obtained might not be valid if it does not satisfy the problem constraints. The area of customer relationship management (CRM) has given rise to real problems where data mining and (global) optimisation must be used together. For example, prescription problems try to distinguish or rank the products to be offered to each customer (or, symmetrically, to choose the customers to whom a product should be offered). These areas (KDD, CRM) lack tools for a more complete view of the problems and a better integration of the models according to their interdependencies and the global and local constraints. The classical application of data mining to prescription problems usually…
    Bella Sanjuán, A. (2012). Model Integration in Data Mining: From Local to Global Decisions [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16964

    On the effect of calibration in classifier combination

    Full text link
    A general approach to classifier combination considers each model as a probabilistic classifier which outputs a class-membership posterior probability. In this general scenario, it is not only the quality and diversity of the models that are relevant, but also the level of calibration of their estimated probabilities. In this paper, we study the role of calibration before and after classifier combination, focusing on evaluation measures such as MSE and AUC, which account for good probability estimation better than other evaluation measures. We present a series of findings that allow us to recommend several layouts for the use of calibration in classifier combination. We also empirically analyse a new non-monotonic calibration method that obtains better results for classifier combination than other monotonic calibration methods.
    We thank the anonymous reviewers for their comments, which have helped to improve this paper significantly. This work was supported by the MEC/MINECO projects CONSOLIDER-INGENIO CSD2007-00022, COST Action IC0801 and TIN 2010-21062-C02-02, GVA project PROMETEO/2008/051, and the REFRAME project granted by the European Coordinated Research on Long-term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA) and funded by the Ministerio de Economía y Competitividad in Spain.
    Bella Sanjuán, A.; Ferri Ramírez, C.; Hernández Orallo, J.; Ramírez Quintana, MJ. (2012). On the effect of calibration in classifier combination. Applied Intelligence. 38(4):566-585. https://doi.org/10.1007/s10489-012-0388-2
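    One combination layout of the kind studied in the paper, averaging the class-membership probabilities of several models after calibrating each one, can be sketched as follows. This is a minimal illustration; the calibration maps below are hypothetical stand-ins for learned calibrators such as Platt scaling or isotonic regression, and the probability values are made up.

```python
def combine(prob_lists, calibrators):
    """Average calibrated positive-class probabilities instance by instance."""
    n = len(prob_lists[0])
    combined = []
    for i in range(n):
        cal_probs = [cal(probs[i]) for probs, cal in zip(prob_lists, calibrators)]
        combined.append(sum(cal_probs) / len(cal_probs))
    return combined

model_a = [0.99, 0.95, 0.10]   # an overconfident model
model_b = [0.70, 0.60, 0.40]   # a better-calibrated model
# Illustrative calibration map: shrink model_a's estimates toward 0.5;
# leave model_b's estimates unchanged.
shrink = lambda p: 0.5 + 0.5 * (p - 0.5)
identity = lambda p: p
out = combine([model_a, model_b], [shrink, identity])
print(out)
```

    Calibrating before averaging, as here, is one of the layouts the paper compares against calibrating the combined output afterwards.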

    Using negotiable features for prescription problems

    Full text link
    Data mining is usually concerned with the construction of accurate models from data, which are typically applied to well-defined problems that can be clearly isolated and formulated independently from other problems. Although much computational effort is devoted to their training and statistical evaluation, model deployment can also represent a scientific problem when several data mining models have to be used together, constraints appear on their application, or they have to be included in decision processes based on different rules, equations and constraints. In this paper we address the problem of combining several data mining models for objects and individuals in a common scenario, where not only can we affect decisions as the result of a change in one or more data mining models, but we also have to solve several optimisation problems, such as choosing one or more inputs to get the best overall result, or readjusting probabilities after a failure. We illustrate the point in the area of customer relationship management (CRM), where we deal with the general problem of prescription between products and customers. We introduce the concept of negotiable feature, which leads to an extended taxonomy of CRM problems of greater complexity, since each new negotiable feature implies a new degree of freedom.
    In this context, we introduce several new problems and techniques, such as data mining model inversion (by ranging over the inputs, or by changing classification problems into regression problems via function inversion), expected profit estimation and curves, global optimisation through a Monte Carlo method, and several negotiation strategies in order to solve this maximisation problem.
    This work has been partially supported by the EU (FEDER) and the Spanish MEC/MICINN, under grant TIN 2007-68093-C02, the Spanish project "Agreement Technologies" (Consolider Ingenio CSD2007-00022) and the GVA project PROMETEO/2008/051.
    Bella Sanjuán, A.; Ferri Ramírez, C.; Hernández Orallo, J.; Ramírez Quintana, MJ. (2011). Using negotiable features for prescription problems. Computing. 91(2):135-168. https://doi.org/10.1007/s00607-010-0129-5
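    The Monte Carlo optimisation over a negotiable feature can be sketched for the single-feature case (price). This is a toy illustration under assumed names: buy_prob stands in for a learned model estimating purchase probability as a function of price, expected profit is that probability times the margin, and random sampling over the feasible range approximates the price that maximises expected profit.

```python
import random

def buy_prob(price):
    # Hypothetical stand-in for a learned model: willingness to buy
    # decays linearly with price and vanishes at 100.
    return max(0.0, 1.0 - price / 100.0)

def expected_profit(price, cost=20.0):
    # Expected profit = purchase probability times margin over cost.
    return buy_prob(price) * (price - cost)

def monte_carlo_best_price(n=10000, lo=20.0, hi=100.0, seed=0):
    # Sample candidate prices uniformly and keep the most profitable one.
    rng = random.Random(seed)
    candidates = [rng.uniform(lo, hi) for _ in range(n)]
    return max(candidates, key=expected_profit)

best = monte_carlo_best_price()
# The analytic optimum of (1 - p/100) * (p - 20) is p = 60, so the
# sampled best price should land close to 60.
print(best)
```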

    The allure of rock crystal in Copper Age southern Iberia: Technical skill and distinguished objects from Valencina de la Concepción (Seville, Spain)

    No full text